[None][fix] Correct virtual memory allocation alignment #9491
Superjomn merged 1 commit into NVIDIA:main from
Conversation
📝 Walkthrough

The changes enhance CUDA memory allocation error reporting and refactor alignment computation in virtual memory management. A new optional `info` parameter on `checkDriver` enriches CUDA driver error messages with call-site context.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Caller
    participant VirtualMemory as VirtualMemory:<br/>allocate()
    participant Config as Configuration:<br/>pageAligned()
    participant Atomic as mAlignment<br/>(atomic)
    participant CUDA as CUDA Runtime

    Caller->>VirtualMemory: allocate(n, device)
    VirtualMemory->>Config: pageAligned(n, device)
    Config->>Atomic: load mAlignment
    alt mAlignment == 0 (uninitialized)
        Config->>CUDA: cuMemGetAllocationGranularity()
        CUDA-->>Config: gpu_alignment
        Config->>Config: lcm(system_page_size, gpu_alignment)
        Config->>Atomic: store alignment (spin-wait)
    end
    Config->>Config: round n to alignment
    Config-->>VirtualMemory: page_aligned_size
    VirtualMemory->>CUDA: cuMemAddressReserve(page_aligned_size)
    alt Success
        CUDA-->>VirtualMemory: reserved address
    else Error
        VirtualMemory->>VirtualMemory: checkDriver with info log
    end
    VirtualMemory-->>Caller: address
```
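For orientation, here is a minimal standalone sketch of the alignment computation the diagram describes. The helper name (`pageAlignedSketch`) and structure are illustrative assumptions, not the PR's exact code, and driver initialization plus error checking are elided for brevity:

```cpp
#include <cstddef>
#include <cuda.h>
#include <numeric>  // std::lcm
#include <unistd.h> // getpagesize

// Illustrative helper: compute a size valid for both the OS page size and the
// GPU's physical allocation granularity, then round n up to that alignment.
// Assumes cuInit() has been called; the CUresult is ignored for brevity.
std::size_t pageAlignedSketch(std::size_t n, int device)
{
    CUmemAllocationProp prop{};
    prop.type = CU_MEM_ALLOCATION_TYPE_PINNED;
    prop.location.type = CU_MEM_LOCATION_TYPE_DEVICE;
    prop.location.id = device;

    std::size_t gpuAlignment = 0;
    cuMemGetAllocationGranularity(&gpuAlignment, &prop, CU_MEM_ALLOC_GRANULARITY_MINIMUM);

    // A common multiple satisfies both constraints at once, hence lcm rather than max.
    std::size_t const alignment = std::lcm(static_cast<std::size_t>(getpagesize()), gpuAlignment);

    // Round n up to the next multiple of alignment.
    return (n + alignment - 1) / alignment * alignment;
}
```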
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Pre-merge checks and finishing touches
❌ Failed checks (2 warnings)
✅ Passed checks (1 passed)
Actionable comments posted: 0
🧹 Nitpick comments (2)
cpp/include/tensorrt_llm/runtime/virtualMemory.h (2)
465-495: Atomic alignment field is a good fit; ensure required headers are included

Using `std::atomic<std::size_t> mAlignment` lazily initialized to 0 matches the new `pageAligned` design and avoids doing CUDA driver queries in the constructor. However, this header now directly depends on `std::atomic` and `std::numeric_limits` (used below), but only `<numeric>` is explicitly included. To keep the header self-contained and robust against transitive include changes, consider explicitly adding:
```diff
 #include <map>
 #include <mutex>
 #include <numeric>
+#include <atomic>
+#include <limits>
 #include <unistd.h>
```
496-529: Lazy, device-aware `pageAligned` implementation is sound; minor edge considerations

The lazy initialization using a sentinel (`loading = max_size_t`), CAS, and a spin-wait with pause/yield is correct for typical usage:

- Alignment is computed once via `cuMemGetAllocationGranularity` for the first device that calls the function, then cached.
- `std::lcm(getpagesize(), gpuAlignment)` ensures the size is valid for both the OS page size and the GPU allocation granularity.
- Concurrent callers either compute once or spin briefly until the value is stored; no data race on `mAlignment`.

Two observations:

1. Device-specific alignment: if heterogeneous multi-GPU scenarios are used (different GPU types with different granularities), the single cached alignment ignores the `device` parameter after first initialization. This is acceptable for homogeneous setups but worth documenting.
2. Memory ordering: the current `memory_order_relaxed` semantics are correct for the scalar alignment value. Consider acquire/release only if this function evolves to coordinate additional state beyond the alignment value itself.

Both observations are optional and forward-looking rather than critical bugs; the implementation is correct as-is for its current usage.
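As a generic illustration of the sentinel/CAS pattern this comment describes (all names are hypothetical; `computeAlignment` stands in for the driver query, and the single-device caching caveat above still applies):

```cpp
#include <atomic>
#include <cstddef>
#include <limits>
#include <thread>

std::size_t computeAlignment(); // hypothetical: queries the driver once (device handling elided)

std::atomic<std::size_t> mAlignment{0}; // 0 = not yet computed

constexpr std::size_t kLoading = std::numeric_limits<std::size_t>::max(); // in-progress sentinel

std::size_t getAlignment()
{
    std::size_t value = mAlignment.load(std::memory_order_relaxed);
    while (value == 0 || value == kLoading)
    {
        std::size_t expected = 0;
        // The first thread to win the CAS computes the value; everyone else spins.
        if (value == 0 && mAlignment.compare_exchange_weak(expected, kLoading, std::memory_order_relaxed))
        {
            value = computeAlignment();
            mAlignment.store(value, std::memory_order_relaxed); // relaxed is fine for a lone scalar
            return value;
        }
        std::this_thread::yield(); // brief spin-wait while another thread computes
        value = mAlignment.load(std::memory_order_relaxed);
    }
    return value;
}
```

If the function ever publishes more state than the scalar itself, the store/load pair would need release/acquire ordering, as noted above.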
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)

- `cpp/include/tensorrt_llm/runtime/virtualMemory.h` (4 hunks)
- `cpp/tensorrt_llm/common/cudaDriverWrapper.h` (2 hunks)
- `cpp/tensorrt_llm/runtime/virtualMemory.cpp` (2 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
`**/*.{cpp,h,cu}`

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

`**/*.{cpp,h,cu}`: Closing braces of namespaces should have a comment saying the namespace it closes (e.g., `} // namespace foo`)
Prefer `const` or `constexpr` variables over `#define` whenever possible, as the latter are not visible to the compiler
A variable that is not modified after its initialization should be declared as `const`
Except `0` (only used in comparison for checking signness/existence/emptiness) and `nullptr`, `true`, `false`, all other literals should only be used for variable initialization and should be replaced with named constants
Use Allman indentation style for braces in C++
Put the semicolon for an empty `for` or `while` loop in a new line
The statement forming the body of a `switch`, `while`, `do .. while` or `for` statement shall be a compound statement (use brace-delimited statements)
`if` and `else` should always be followed by brace-delimited statements, even if empty or a single statement
C++ filenames should use camel case with first letter lowercase (e.g., `thisIsASubDir` and `thisIsAFilename.cpp`)
All filenames involved in compilation of a compilation target must have case-insensitive unique filenames
All types (including class names) should use camel case with uppercase first letter (e.g., `FooBarClass`)
Local variables, methods and namespaces should use camel case with first letter lowercase (e.g., `localFooBar`)
Non-magic-number global variables that are non-static and not defined in anonymous namespace should use camel case prefixed by a lower case 'g' (e.g., `gDontUseGlobalFoos`)
Non-magic-number global variables that are static or defined in an anonymous namespace should use camel case prefixed by a lower case 's' (e.g., `sMutableStaticGlobal`)
Locally visible static variables should use camel case with lowercase prefix 's' as the first letter of the name (e.g., `static std::once_flag sFlag;`)
Public, private and protected class member variables should use camel case prefixed with 'm' (e.g., `mNbFooValues`), though the 'm' pre...

Files:

- `cpp/tensorrt_llm/common/cudaDriverWrapper.h`
- `cpp/tensorrt_llm/runtime/virtualMemory.cpp`
- `cpp/include/tensorrt_llm/runtime/virtualMemory.h`
`**/*.h`

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

`**/*.h`: Use a preprocessor guard in C++ header files with the guard name format `TRTLLM_` followed by the filename in all caps (e.g., `TRTLLM_FOO_BAR_HELLO_H` for file `FooBarHello.h`); do not include directory names in the symbol
Do not use underscore prefix or suffix in C++ preprocessor guard symbols; they are reserved in C++ standard for compilers or implementation

Files:

- `cpp/tensorrt_llm/common/cudaDriverWrapper.h`
- `cpp/include/tensorrt_llm/runtime/virtualMemory.h`
`**/*.{cpp,h,cu,py}`

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

All TensorRT-LLM Open Source Software code files should contain an NVIDIA copyright header that includes the current year at the top

Files:

- `cpp/tensorrt_llm/common/cudaDriverWrapper.h`
- `cpp/tensorrt_llm/runtime/virtualMemory.cpp`
- `cpp/include/tensorrt_llm/runtime/virtualMemory.h`
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Pre-commit Check
🔇 Additional comments (9)
cpp/tensorrt_llm/common/cudaDriverWrapper.h (2)
143-162: Extended `checkDriver` info parameter looks correct and preserves existing behavior

The added `info` parameter cleanly augments the error message when non-null and defaults to `nullptr`, so existing call sites remain source-compatible. The conditional formatting branch is straightforward and still falls back to the previous message structure when `info` is absent. No functional issues spotted here.
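A hedged sketch of the shape described above; the real `checkDriver` lives in `cudaDriverWrapper.h` and its full signature may differ:

```cpp
#include <cuda.h>
#include <stdexcept>
#include <string>

// Illustrative only: the optional info augments the message; nullptr keeps old behavior.
void checkDriverSketch(CUresult result, char const* expr, char const* info = nullptr)
{
    if (result == CUDA_SUCCESS)
    {
        return;
    }
    std::string msg = std::string("CUDA driver call failed: ") + expr;
    if (info != nullptr)
    {
        msg += " (" + std::string(info) + ")"; // extra context only when provided
    }
    throw std::runtime_error(msg);
}
```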
187-192: `TLLM_CU_CHECK_WITH_INFO` is well-integrated; consider usage constraints

The new macro correctly preformats the info string and passes it into `checkDriver`, giving richer context on CUDA driver failures. Lifetime of the `fmtstr(...).c_str()` argument is safe for the duration of the call. Just keep in mind this remains a host-side utility (like other `TLLM_*CHECK` macros) and should not be invoked from device code.
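A usage sketch matching the pattern described in the next comment; `handle`, `prop`, and `mSize` are placeholders for the real members in `LocalCreator::create`:

```cpp
// Hypothetical call site: on failure, the formatted info string is forwarded
// to checkDriver together with the usual error details.
TLLM_CU_CHECK_WITH_INFO(
    cuMemCreate(&handle, mSize, &prop, /*flags=*/0), "allocating %zu bytes of memory", mSize);
```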
cpp/include/tensorrt_llm/runtime/virtualMemory.h (1)

203-239: Improved `cuMemCreate` error reporting in `LocalCreator::create`

Switching to `TLLM_CU_CHECK_WITH_INFO` for `cuMemCreate` with `"allocating %zu bytes of memory"` and `mSize` gives exactly the extra context you want when allocations fail, without changing semantics. The memory accounting that follows still uses the same `mSize`, so behavior remains consistent.
cpp/tensorrt_llm/runtime/virtualMemory.cpp (6)

49-60: More informative status checks in `materialize`

Using `TLLM_CHECK_WITH_INFO` with the current status value before materializing a chunk is an improvement over a plain assert-style check. It keeps the control flow unchanged but gives clear diagnostics if materialize is called in an invalid state.
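The shape of such a check, with hypothetical status names (the real status type is in the PR's code):

```cpp
// Illustrative: report the offending status value instead of a bare assert.
TLLM_CHECK_WITH_INFO(
    status == Status::RELEASED, "cannot materialize chunk in status %d", static_cast<int>(status));
```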
88-115: Release-path status validation is clearer and safer

The new `TLLM_CHECK_WITH_INFO` in `_release` precisely documents valid states (`MATERIALIZED`, or `ERRORED` with non-`INVALID_STATE`) and reports the actual status on misuse. This doesn't change behavior when called correctly but improves failure-mode debuggability.
153-172: Manager `add` now surfaces detailed misuse information

Switching these internal checks in `CudaVirtualMemoryManager::add` to `TLLM_CHECK_WITH_INFO` (status validation and duplicate-handle detection) is a straight upgrade in error reporting and doesn't alter the success path. The `ScopeGuard` rollback logic remains intact.
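A minimal scope-guard sketch for the rollback pattern referred to here; the repo has its own `ScopeGuard`, so this stand-in only illustrates the idea:

```cpp
#include <functional>
#include <utility>

class ScopeGuardSketch
{
public:
    explicit ScopeGuardSketch(std::function<void()> onExit)
        : mOnExit(std::move(onExit))
    {
    }

    ~ScopeGuardSketch()
    {
        if (mActive)
        {
            mOnExit(); // roll back unless dismissed
        }
    }

    void dismiss()
    {
        mActive = false; // success path: keep the registered state
    }

private:
    std::function<void()> mOnExit;
    bool mActive = true;
};

// Usage pattern (hypothetical container/id): register, then dismiss on success.
// ScopeGuardSketch guard{[&] { handles.erase(id); }};
// ... validation that may throw ...
// guard.dismiss();
```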
339-380: Alignment fix in `allocate` correctly ties VA reservation, mapping, and allocation to GPU granularity

The revised allocation path looks correct and addresses the original bug:

- `pageAlignedSize = mConfig->pageAligned(n, device)` ensures the size passed to `cuMemAddressReserve`, `UnicastConfigurator` (for `cuMemMap`/`cuMemUnmap`), and `LocalCreator`/`cuMemCreate` is a common multiple of both system page size and the GPU's allocation granularity.
- `TLLM_CU_CHECK_WITH_INFO` around `cuMemAddressReserve` with `"allocating %zu bytes of address space"` provides precise context on failures.
- The configurators that operate on the logical payload (`MemsetConfigurator`, `OffloadConfigurator`) still use `n`, so they only touch/back up the intended region inside the larger aligned allocation, which is fine.

Taken together, the VA reservation, mapping size, and physical allocation size are now consistent and satisfy the GPU alignment requirements.

You may want to run a quick smoke test on devices with non-trivial `cuMemGetAllocationGranularity` (e.g., large-BAR configurations) and verify that `cuMemCreate`, `cuMemMap`, and `cuMemAddressReserve`/`cuMemAddressFree` all succeed for a variety of non-aligned `n` values.
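Such a smoke test might look like the following sketch, reusing the hypothetical `pageAlignedSketch` helper from earlier; error handling is reduced to a simple macro and cleanup on failure is omitted for brevity:

```cpp
#include <cstddef>
#include <cstdio>
#include <cuda.h>

#define CHECK(call)                                                                                \
    do                                                                                             \
    {                                                                                              \
        CUresult s_ = (call);                                                                      \
        if (s_ != CUDA_SUCCESS)                                                                    \
        {                                                                                          \
            std::printf("%s failed: %d\n", #call, (int) s_);                                       \
            return;                                                                                \
        }                                                                                          \
    } while (0)

void smokeTest(int device)
{
    CUmemAllocationProp prop{};
    prop.type = CU_MEM_ALLOCATION_TYPE_PINNED;
    prop.location.type = CU_MEM_LOCATION_TYPE_DEVICE;
    prop.location.id = device;

    // Deliberately non-aligned sizes; each is rounded up before the driver calls.
    for (std::size_t n : {1ul, 4097ul, (1ul << 20) + 1, (1ul << 21) + 13})
    {
        std::size_t const size = pageAlignedSketch(n, device); // sketched above

        CUdeviceptr ptr = 0;
        CUmemGenericAllocationHandle handle = 0;
        CHECK(cuMemAddressReserve(&ptr, size, 0, 0, 0));
        CHECK(cuMemCreate(&handle, size, &prop, 0));
        CHECK(cuMemMap(ptr, size, 0, handle, 0));

        CUmemAccessDesc access{};
        access.location = prop.location;
        access.flags = CU_MEM_ACCESS_FLAGS_PROT_READWRITE;
        CHECK(cuMemSetAccess(ptr, size, &access, 1));

        CHECK(cuMemUnmap(ptr, size));
        CHECK(cuMemRelease(handle));
        CHECK(cuMemAddressFree(ptr, size));
    }
}
```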
382-389: Deallocate remains consistent with new alignment strategy

`deallocate` now relies on `mConfig->pageAligned(n)` for `cuMemAddressFree`, which uses the same cached alignment value computed on first use in `pageAligned`. For a given `n`, the aligned size used in `cuMemAddressReserve` and in `cuMemAddressFree` will match, so freeing the VA range remains correct under the new device-aware alignment scheme.
417-426: Allocator configuration guard now emits helpful context on misuse

The `TLLM_CHECK_WITH_INFO` in `setVirtualMemoryAllocator` that prints the tag, mode, and stream of an already-active allocator is a useful diagnostic improvement when someone attempts to reconfigure virtual memory while it is already in use. No behavior change for the valid path, and the message should make misconfiguration much easier to track down.
f51758c to a35aac3 (Compare)

/bot run

PR_Github #25929 [ run ] triggered by Bot. Commit:

PR_Github #25929 [ run ] completed with state

Signed-off-by: Yuan Tong <13075180+tongyuantongyu@users.noreply.github.com>

a35aac3 to 30d67da (Compare)

/bot run

PR_Github #26120 [ run ] triggered by Bot. Commit:

PR_Github #26120 [ run ] completed with state

/bot run

PR_Github #26244 [ run ] triggered by Bot. Commit:

PR_Github #26244 [ run ] completed with state
(NVIDIA#9690) Signed-off-by: Mindy Li <11663212+limin2021@users.noreply.github.com> [TRTLLM-9431][perf] Enable multistream for Linear Attention in Qwen3-… (NVIDIA#9696) Signed-off-by: nv-guomingz <137257613+nv-guomingz@users.noreply.github.com> [None][chore] Remove closed bugs (NVIDIA#9770) Signed-off-by: xinhe-nv <200704525+xinhe-nv@users.noreply.github.com> [None][infra] update mooncake in docker images (NVIDIA#9584) Signed-off-by: zhengd-nv <200704041+zhengd-nv@users.noreply.github.com> Signed-off-by: Zheng Duan <200704041+zhengd-nv@users.noreply.github.com> [None][test] Add Kimi k2 WIDEEP perf and accuracy cases (NVIDIA#9686) Signed-off-by: FredricZ-2007 <226039983+fredricz-20070104@users.noreply.github.com> Signed-off-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com> Co-authored-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com> [https://nvbugs/5527655][test] Add test case for RCCA 5527655 (NVIDIA#9511) Signed-off-by: FredricZ-2007 <226039983+fredricz-20070104@users.noreply.github.com> [http://nvbugs/5649010][fix] fix test_auto_scaling.py::test_worker_restart timeout (NVIDIA#9775) Signed-off-by: Lizhi Zhou <1432185+reasonsolo@users.noreply.github.com> [None][fix] Switch AutoDeploy's default allreduce strategy to NCCL (NVIDIA#9666) Signed-off-by: Eran Geva <19514940+MrGeva@users.noreply.github.com> [TRTLLM-9506][fix] Fix AR for DeepSeek-R1 2 model path (NVIDIA#9661) Signed-off-by: qgai <qgai@nvidia.com> ray + updatew works trtllm works in async env trtllm works in sync and async env ray + updatew works rebase to the updated verl server mode still cherry pick still cherry pick still cherry pick integrated http interface hang at RyExecutor create workers ray.remote clean code use tensorrt_llm.rlhf_utils Signed-off-by: Liwei Ma <liweim@nvidia.com> placement, asyncllm, and basic tests Signed-off-by: Erin Ho <14718778+hchings@users.noreply.github.com> connect sleep and wakeup; Add support to pass None to update_weights Signed-off-by: Erin Ho <14718778+hchings@users.noreply.github.com> Batching ctx for IFB scheduler Signed-off-by: Yuan Tong <13075180+tongyuantongyu@users.noreply.github.com> accuracy WAR for TP>1: always use AllReduceStrategy.NCCL, refactored Signed-off-by: Erin Ho <14718778+hchings@users.noreply.github.com> fix e2e integration Signed-off-by: Superjomn <328693+Superjomn@users.noreply.github.com> update asyncllm, other nits Signed-off-by: Erin Ho <14718778+hchings@users.noreply.github.com> fix init setup Signed-off-by: Erin Ho <14718778+hchings@users.noreply.github.com> Fix TRTLLMSampler logprobs perf Signed-off-by: Yuan Tong <13075180+tongyuantongyu@users.noreply.github.com> fix and cleanup Signed-off-by: Erin Ho <14718778+hchings@users.noreply.github.com> fix server Signed-off-by: Erin Ho <14718778+hchings@users.noreply.github.com> Revert "Batching ctx for IFB scheduler" This reverts commit b51aac0 Signed-off-by: Yuan Tong <13075180+tongyuantongyu@users.noreply.github.com> update & address comments Signed-off-by: Erin Ho <14718778+hchings@users.noreply.github.com>
Signed-off-by: Yuan Tong <13075180+tongyuantongyu@users.noreply.github.com>
Description
This PR fixes a bug where the virtual address reservation / virtual memory allocation size may not satisfy the GPU alignment requirement.
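A minimal sketch of one way to compute such an alignment, assuming the size is rounded up to satisfy both the host page size and the GPU's minimum allocation granularity; the function name `pageAlignedSize` and the omission of error checking are illustrative only, not the repository's implementation:

```cpp
#include <cstddef>
#include <numeric>   // std::lcm (C++17)
#include <unistd.h>  // sysconf
#include <cuda.h>

// Round `n` up so it satisfies both the CPU page size and the GPU's
// minimum allocation granularity (error checking omitted for brevity).
std::size_t pageAlignedSize(std::size_t n, CUdevice device)
{
    // Query the GPU's minimum granularity for virtual memory allocations.
    CUmemAllocationProp prop{};
    prop.type = CU_MEM_ALLOCATION_TYPE_PINNED;
    prop.location.type = CU_MEM_LOCATION_TYPE_DEVICE;
    prop.location.id = device;

    std::size_t gpuGranularity = 0;
    cuMemGetAllocationGranularity(&gpuGranularity, &prop,
                                  CU_MEM_ALLOC_GRANULARITY_MINIMUM);

    // An alignment satisfying both requirements is their least common multiple.
    auto const pageSize = static_cast<std::size_t>(sysconf(_SC_PAGESIZE));
    std::size_t const alignment = std::lcm(pageSize, gpuGranularity);

    // Round n up to the next multiple of alignment.
    return (n + alignment - 1) / alignment * alignment;
}
```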
Test Coverage
VirtualMemoryManagerTest.TestCudaVirtualMemoryAllocatorUnalignedSize
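As a hypothetical sketch only, a test of this kind can request a deliberately unaligned size and assert the rounding properties; it reuses the illustrative `pageAlignedSize` helper above and is not the actual test body:

```cpp
#include <gtest/gtest.h>

TEST(VirtualMemoryManagerSketch, UnalignedSizeIsRoundedUp)
{
    // A deliberately odd request, neither page- nor granularity-aligned.
    std::size_t const requested = (1u << 20) + 123;
    std::size_t const aligned = pageAlignedSize(requested, /*device=*/0);

    auto const pageSize = static_cast<std::size_t>(sysconf(_SC_PAGESIZE));
    EXPECT_GE(aligned, requested);      // the request is never shrunk
    EXPECT_EQ(aligned % pageSize, 0u);  // result is at least page-aligned
}
```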
PR Checklist
Please review the following before submitting your PR:
PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
PR follows the TRT-LLM CODING GUIDELINES to the best of your knowledge.
Test cases are provided for new code paths (see test instructions)
Any new dependencies have been scanned for license and vulnerabilities
CODEOWNERS updated if ownership changes
Documentation updated as needed
Update tava architecture diagram if there is a significant design change in the PR.
The reviewers assigned automatically/manually are appropriate for the PR.
Please check this after reviewing the above items as appropriate for this PR.
GitHub Bot Help
/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...
Provides a user-friendly way for developers to interact with a Jenkins server.
Run /bot [-h|--help] to print this help message. See details below for each supported subcommand.
Details
run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug (experimental)]
Launch build/test pipelines. All previously running jobs will be killed.
--reuse-test (optional)pipeline-id (OPTIONAL): Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option is always ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.
--disable-reuse-test (OPTIONAL): Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests run regardless of previous successes.
--disable-fail-fast (OPTIONAL): Disable fail-fast on build/test/infra failures.
--skip-test (OPTIONAL): Skip all test stages, but still run build stages, package stages, and sanity check stages. Note: does NOT update GitHub check status.
--stage-list "A10-PyTorch-1, xxx" (OPTIONAL): Only run the specified test stages. Example: "A10-PyTorch-1, xxx". Note: does NOT update GitHub check status.
--gpu-type "A30, H100_PCIe" (OPTIONAL): Only run the test stages on the specified GPU types. Example: "A30, H100_PCIe". Note: does NOT update GitHub check status.
--test-backend "pytorch, cpp" (OPTIONAL): Skip test stages that don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Example: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: does NOT update GitHub pipeline status.
--only-multi-gpu-test (OPTIONAL): Only run the multi-GPU tests. Note: does NOT update GitHub check status.
--disable-multi-gpu-test (OPTIONAL): Disable the multi-GPU tests. Note: does NOT update GitHub check status.
--add-multi-gpu-test (OPTIONAL): Force run the multi-GPU tests in addition to running the L0 pre-merge pipeline.
--post-merge (OPTIONAL): Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.
--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL): Run the ordinary L0 pre-merge pipeline plus the specified test stages. Example: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".
--detailed-log (OPTIONAL): Enable flushing all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.
--debug (OPTIONAL): Experimental feature. Enable access to the CI container for debugging purposes. Note: specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: does NOT update GitHub check status.
For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md and the scripts/test_to_stage_mapping.py helper.
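As an illustration only, typical invocations combine the flags documented above:

```
/bot run --disable-fail-fast --gpu-type "A30, H100_PCIe"
/bot run --stage-list "A10-PyTorch-1" --debug
```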
kill
kill: Kill all running builds associated with the pull request.
skip
skip --comment COMMENT
Skip testing for the latest commit on the pull request.
--comment "Reason for skipping build/test"is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.reuse-pipeline
reuse-pipeline
Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can cause the top of tree to break.